Augmented reality (AR) has been shown to be beneficial for navigation purposes, assisting physicians during surgical procedures. These applications typically require knowing the pose of the surgical tools and the patient in order to provide visual information that the surgeon can use during task execution. Existing medical-grade tracking systems use infrared cameras placed inside the operating room (OR) to identify retro-reflective markers attached to objects of interest and compute their pose. Some commercially available AR head-mounted displays (HMDs) use similar cameras for self-localization, hand tracking, and estimating object depth. This work presents a framework that uses the built-in cameras of an AR HMD to accurately track retro-reflective markers, such as those used during surgical procedures, without integrating any additional components. The framework is also able to track multiple tools simultaneously. Our results show that marker detection and tracking can be achieved with an accuracy of 0.09 ± 0.06 mm in lateral translation, 0.42 ± 0.32 mm in longitudinal translation, and 0.80 ± 0.39 degrees for rotations about the vertical axis. Furthermore, to demonstrate the relevance of the proposed framework, we evaluated the system's performance in the context of a surgical procedure. This use case was designed to replicate a K-wire insertion scenario in an orthopedic procedure. For the evaluation, two surgeons and one biomedical researcher were provided with visual navigation, each performing 21 insertions. The results of this use case show an accuracy comparable to that reported for AR-based navigation procedures.
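As a hedged illustration of the rigid-pose step such marker-tracking pipelines commonly rely on: once the 3D centers of the retro-reflective spheres have been detected by the HMD's cameras and matched to the known marker geometry, the tool pose can be recovered with a Kabsch/SVD alignment. The marker geometry and function names below are hypothetical; the abstract does not specify the paper's actual pipeline.

```python
import numpy as np

def estimate_marker_pose(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Rigid transform (R, t) mapping marker model points to observed points.

    model_pts, observed_pts: (N, 3) arrays of corresponding 3D sphere centers.
    Standard Kabsch/SVD alignment; illustrative only.
    """
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Hypothetical 4-sphere marker geometry (mm) and a simulated observation.
model = np.array([[0, 0, 0], [50, 0, 0], [0, 70, 0], [30, 40, 20]], dtype=float)
true_t = np.array([10.0, -5.0, 200.0])
observed = model + true_t
R, t = estimate_marker_pose(model, observed)
print(np.allclose(t, true_t))  # True
```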
The spread of COVID-19 has brought a huge disaster to the world, and automatic segmentation of the infected regions can help doctors diagnose quickly and reduce their workload. However, accurate and complete segmentation faces several challenges, such as the scattered distribution of infected areas, complex background noise, and blurred segmentation boundaries. To this end, in this paper we propose a novel network for automatic COVID-19 lung infection segmentation from CT images, named BCS-Net, which takes boundary, context, and semantic attributes into account. BCS-Net follows an encoder-decoder architecture, with most of the design concentrated on the decoder stage, which includes three progressive Boundary-Context-Semantic Reconstruction (BCSR) blocks. In each BCSR block, an Attention-Guided Global Context (AGGC) module is designed to learn the encoder features most valuable to the decoder by highlighting important spatial and boundary locations and modeling global context dependencies. In addition, a Semantic Guidance (SG) unit generates a semantic guidance map, produced by aggregating multi-scale high-level features at an intermediate resolution, to refine the decoder features. Extensive experiments demonstrate that our proposed framework outperforms existing competitors both qualitatively and quantitatively.
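A minimal PyTorch sketch of an attention-guided global-context style module in the spirit of the AGGC described above: encoder features are re-weighted by a spatial attention map and then augmented with a globally aggregated context vector. The class name and exact structure are assumptions, not the BCS-Net implementation.

```python
import torch
import torch.nn as nn

class AttentionGuidedGlobalContext(nn.Module):
    """Illustrative stand-in for an AGGC-style module (not the paper's code)."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.context_weight = nn.Conv2d(channels, 1, kernel_size=1)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, enc_feat: torch.Tensor) -> torch.Tensor:
        # Highlight important spatial / boundary locations.
        attended = enc_feat * self.spatial_attn(enc_feat)
        # Model global context: softmax-weighted sum over all positions.
        b, c, h, w = attended.shape
        weights = self.context_weight(attended).view(b, 1, h * w).softmax(dim=-1)
        context = torch.bmm(attended.view(b, c, h * w), weights.transpose(1, 2))
        context = self.transform(context.view(b, c, 1, 1))
        return attended + context  # broadcast global context back over the map

feat = torch.randn(2, 64, 32, 32)
print(AttentionGuidedGlobalContext(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```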
Bayesian methods, distributionally robust optimization methods, and regularization methods are three pillars of trustworthy machine learning hedging against distributional uncertainty, e.g., the uncertainty of an empirical distribution compared to the true underlying distribution. This paper investigates the connections among the three frameworks and, in particular, explores why these frameworks tend to have smaller generalization errors. Specifically, first, we suggest a quantitative definition of "distributional robustness", propose the concept of a "robustness measure", and formalize several philosophical concepts in distributionally robust optimization. Second, we show that Bayesian methods are distributionally robust in the probably approximately correct (PAC) sense; in addition, by constructing a Dirichlet-process-like prior in Bayesian nonparametrics, we prove that any regularized empirical risk minimization method is equivalent to a Bayesian method. Third, we show that the generalization errors of machine learning models can be characterized using the distributional uncertainty of the nominal distribution and the robustness measures of these models, which offers a new perspective for bounding generalization errors and thereby explains why distributionally robust machine learning models, Bayesian models, and regularization models tend to have smaller generalization errors.
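For orientation only, the textbook distributionally robust optimization problem hedges against an ambiguity ball around the empirical distribution; the paper's own definitions of "distributional robustness" and "robustness measure" are given in the paper and are not reproduced here.

```latex
% A standard DRO formulation (illustrative; not the paper's exact definitions).
% \hat{P}_n is the empirical distribution, d(\cdot,\cdot) a discrepancy
% (e.g., a Wasserstein distance or KL divergence), and \epsilon the radius
% of the ambiguity set that quantifies distributional uncertainty.
\min_{\theta \in \Theta} \;
  \sup_{Q \,:\, d(Q, \hat{P}_n) \le \epsilon} \;
  \mathbb{E}_{z \sim Q}\bigl[\ell(\theta; z)\bigr]
```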
Multi-hop Knowledge Base Question Answering (KBQA) aims to find the answer entity in a knowledge base, which is several hops away from the topic entity mentioned in the question. Existing retrieval-based approaches first generate instructions from the question and then use them to guide multi-hop reasoning over the knowledge graph. Because the instructions are fixed throughout the reasoning process and the knowledge graph is not considered during instruction generation, the model cannot revise its mistakes once an intermediate entity is incorrectly predicted. To address this issue, we propose KBIGER (Knowledge Base Iterative Instruction GEneration and Reasoning), a novel and effective approach that dynamically generates instructions with the help of the reasoning graph. Instead of generating all instructions before reasoning, we take the (k-1)-th reasoning graph into account when constructing the k-th instruction. In this way, the model can check the predictions over the graph and generate new instructions to revise incorrect predictions of intermediate entities. We conduct experiments on two multi-hop KBQA benchmarks, outperforming existing approaches and achieving new state-of-the-art results. Further experiments show that our method indeed detects incorrect predictions of intermediate entities and is able to revise such errors.
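A minimal sketch of the control flow described above: rather than precomputing all reasoning instructions from the question, each step's instruction is generated conditioned on the question and the previous step's reasoning graph state. All module names and tensor shapes are hypothetical placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

class IterativeInstructionReasoner(nn.Module):
    """Sketch of 'generate the k-th instruction from the (k-1)-th reasoning graph'.

    question_repr: (B, D) question encoding; init_graph_state: (B, D) pooled
    representation of the initial reasoning graph. Purely illustrative.
    """
    def __init__(self, dim: int, num_hops: int):
        super().__init__()
        self.num_hops = num_hops
        self.instruction_gen = nn.GRUCell(dim, dim)    # fuses question + graph state
        self.reasoning_step = nn.Linear(2 * dim, dim)  # stand-in for graph reasoning

    def forward(self, question_repr: torch.Tensor, init_graph_state: torch.Tensor):
        graph_state = init_graph_state
        instruction = torch.zeros_like(question_repr)
        states = []
        for _ in range(self.num_hops):
            # The k-th instruction depends on the (k-1)-th reasoning graph state,
            # so earlier mistakes can be detected and revised.
            instruction = self.instruction_gen(question_repr + graph_state, instruction)
            graph_state = torch.tanh(
                self.reasoning_step(torch.cat([graph_state, instruction], dim=-1))
            )
            states.append(graph_state)
        return states  # one graph state per hop; the last one scores answer entities

q = torch.randn(4, 128)
g0 = torch.randn(4, 128)
print(len(IterativeInstructionReasoner(128, num_hops=3)(q, g0)))  # 3
```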
Microscopic traffic simulation provides a controllable, repeatable, and efficient testing environment for autonomous vehicles (AVs). To evaluate the safety performance of AVs without bias, the probability distributions of the environmental statistics in the simulated naturalistic driving environment (NDE) must be consistent with those of the real-world driving environment. However, although human driving behaviors have been studied extensively in the transportation engineering field, most existing models were developed for traffic flow analysis without considering the distributional consistency of driving behaviors, which may cause significant evaluation bias in AV testing. To fill this research gap, this paper proposes a distributionally consistent NDE modeling framework. Using large-scale naturalistic driving data, empirical distributions are obtained to construct stochastic human driving behavior models under different conditions. To address the error accumulation problem during simulation, an optimization-based method is further designed to refine the empirical behavior models. Specifically, the evolution of vehicle states is modeled as a Markov chain, and its stationary distribution is twisted to match the distribution of the real-world driving environment. The framework is evaluated in a case study of multi-lane highway driving simulation, where the distributional accuracy of the generated NDE is verified and the safety performance of an AV model is effectively evaluated.
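A small numerical sketch of the Markov-chain view mentioned above: estimate a transition matrix from data, compute its stationary distribution, and compare it to a target real-world distribution. The paper's optimization that twists the chain toward the target is more involved and is not reproduced here; the states and numbers below are invented for illustration.

```python
import numpy as np

def stationary_distribution(P: np.ndarray) -> np.ndarray:
    """Stationary distribution pi with pi P = pi for a row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvector for eigenvalue 1
    pi = np.abs(pi)
    return pi / pi.sum()

# Hypothetical 3-state behavior chain (e.g., discretized speed/gap regimes)
# estimated from naturalistic driving data.
P_empirical = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.25, 0.70],
])
pi_sim = stationary_distribution(P_empirical)
pi_real = np.array([0.35, 0.45, 0.20])  # assumed real-world marginal

# Error accumulation shows up as a gap between the chain's long-run behavior
# and the real-world distribution; refining the behavior model shrinks this gap.
print(pi_sim.round(3), np.abs(pi_sim - pi_real).sum().round(3))
```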
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
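A hedged sketch of the simpler of the two attacks described above (NAIVEATTACK): a fixed trigger patch is stamped onto a fraction of the raw images, which are relabeled to a target class before distillation begins. DOORPING's iterative trigger updates during distillation are not shown; all parameter values and function names are illustrative, not the paper's code.

```python
import numpy as np

def naive_trigger_injection(images, labels, target_class=0,
                            poison_frac=0.1, trigger_size=3, seed=0):
    """Stamp a white square trigger on a random subset of images and relabel them.

    images: (N, H, W, C) float array in [0, 1]; labels: (N,) int array.
    Mirrors the idea of adding triggers to raw data before distillation;
    illustrative only.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:, :] = 1.0  # bottom-right patch
    labels[idx] = target_class
    return images, labels, idx

x = np.random.rand(100, 32, 32, 3)
y = np.random.randint(0, 10, size=100)
x_p, y_p, poisoned = naive_trigger_injection(x, y, target_class=7)
print(len(poisoned), (y_p[poisoned] == 7).all())  # 10 True
```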
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, which have little precedent in the literature, even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
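Finetuning a text-only language model on MIDI requires serializing each performance into a token sequence. The sketch below shows one plausible encoding of drum hits as text; the abstract does not specify the paper's actual representation, so the token format here is purely hypothetical.

```python
from typing import List, Tuple

def drum_hits_to_text(hits: List[Tuple[int, int]], ticks_per_step: int = 120) -> str:
    """Serialize (tick, midi_note) drum hits into a whitespace-separated token string.

    Each hit becomes 'd<note>', and gaps between hits become 'w<steps>' wait
    tokens, so a text-only language model can be finetuned on the result.
    Hypothetical encoding for illustration; not the paper's format.
    """
    tokens, prev_tick = [], 0
    for tick, note in sorted(hits):
        gap = (tick - prev_tick) // ticks_per_step
        if gap > 0:
            tokens.append(f"w{gap}")
        tokens.append(f"d{note}")
        prev_tick = tick
    return " ".join(tokens)

# Kick (36), closed hi-hat (42), snare (38) over part of a bar.
hits = [(0, 36), (0, 42), (240, 42), (480, 38), (480, 42), (720, 42)]
print(drum_hits_to_text(hits))  # d36 d42 w2 d42 w2 d38 d42 w2 d42
```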
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and model will be available.
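A hedged PyTorch sketch of the feature-level idea above: support features are pooled under the support mask into a class center, which then re-weights the query features. The pooling and gating functions are assumptions for illustration, not the RefT implementation.

```python
import torch
import torch.nn.functional as F

def mask_pooled_class_center(support_feat: torch.Tensor, support_mask: torch.Tensor):
    """Average support features over the (downsampled) foreground mask.

    support_feat: (B, C, H, W); support_mask: (B, 1, H0, W0) binary mask.
    Returns a (B, C) class center. Illustrative only.
    """
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:], mode="nearest")
    denom = mask.sum(dim=(2, 3)).clamp(min=1.0)              # (B, 1)
    center = (support_feat * mask).sum(dim=(2, 3)) / denom   # (B, C)
    return center

def reweight_query(query_feat: torch.Tensor, class_center: torch.Tensor):
    """Channel-wise re-weighting of query features by the support class center."""
    gate = torch.sigmoid(class_center).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
    return query_feat * gate

support = torch.randn(2, 256, 32, 32)
mask = (torch.rand(2, 1, 128, 128) > 0.5)
query = torch.randn(2, 256, 32, 32)
center = mask_pooled_class_center(support, mask)
print(reweight_query(query, center).shape)  # torch.Size([2, 256, 32, 32])
```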
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
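A hedged sketch of how a fairness term can be attached to a standard knowledge-distillation objective in this setting. RELIANT's actual debiasing mechanism is decoupled from the teacher/student architectures and differs from the simple group-parity penalty used below, which is only an assumed illustration.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive_attr,
                 temperature=2.0, alpha=0.5, beta=1.0):
    """Task loss + distillation loss + a simple group-parity fairness penalty.

    sensitive_attr: (N,) binary group indicator. The parity term penalizes the
    gap in mean predicted positive probability between the two groups
    (statistical parity); illustrative only, not RELIANT's objective.
    """
    task = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    probs = F.softmax(student_logits, dim=-1)[:, 1]        # P(class 1)
    g0, g1 = probs[sensitive_attr == 0], probs[sensitive_attr == 1]
    parity = (g0.mean() - g1.mean()).abs() if len(g0) and len(g1) else probs.sum() * 0
    return task + alpha * kd + beta * parity

s = torch.randn(64, 2, requires_grad=True)
t = torch.randn(64, 2)
y = torch.randint(0, 2, (64,))
a = torch.randint(0, 2, (64,))
print(fair_kd_loss(s, t, y, a).item())
```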